Return-Path: <icon-group-sender>
Received: from kingfisher.CS.Arizona.EDU (kingfisher.CS.Arizona.EDU [192.12.69.239])
by baskerville.CS.Arizona.EDU (8.8.7/8.8.7) with SMTP id QAA12726
for <icon-group-addresses@baskerville.CS.Arizona.EDU>; Wed, 21 Jan 1998 16:41:53 -0700 (MST)
Received: by kingfisher.CS.Arizona.EDU (5.65v4.0/1.1.8.2/08Nov94-0446PM)
id AA30302; Wed, 21 Jan 1998 16:41:53 -0700
To: icon-group@optima.CS.Arizona.EDU
Date: Wed, 21 Jan 1998 16:52:45 GMT
From: evans@gte.net (MJE)
Message-Id: <6a596h$gt3$1@gte2.gte.net>
Organization: None
Sender: icon-group-request@optima.CS.Arizona.EDU
Subject: Shannon-theoretic Language Approximators
Errors-To: icon-group-errors@optima.CS.Arizona.EDU
Status: RO
Content-Length: 1277
It appears to me that this newsgroup does not have much traffic, so please feel
free to relay this message to any other parties capable of answering it.
I am wondering whether anyone has written, in the Icon language, a random-text
generator of the sort described in the book "An Introduction to Information
Theory: Symbols, Signals and Noise" by John Robinson Pierce (paperback;
@ US$7.16 from http://www.amazon.com).
This book is a kind of layman's overview of Shannon's information theory. It
describes Shannon's work at a conceptual level.
One of Shannon's studies involved the generation of random words that
correspond, in a statistical/probabilistic sense, to English. The text is
meaningless, but because it corresponds to the statistics of English, it can
serve as a basis for studying the transmission of English prose. In principle,
the technique applies to any other language as well.
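
For concreteness, a first-order word approximation of the kind Pierce describes
could be sketched in Icon along these lines. This is an untested sketch of my
own, not code from the book: it records which words follow which in a sample
text read from standard input, then takes a random walk through those
successor lists.

    # Sketch (untested): first-order word approximation to a language.
    procedure main()
       succ := table()                  # word -> list of observed successors
       prev := ""
       while line := read() do
          line ? while tab(upto(&letters)) do {
             word := map(tab(many(&letters)))   # fold to lower case
             /succ[prev] := []
             put(succ[prev], word)
             prev := word
          }
       w := ""
       every 1 to 50 do {               # emit 50 words of "statistical English"
          w := ?\succ[w] | break
          writes(w, " ")
       }
       write()
    end

Higher-order approximations would key the table on pairs or triples of
preceding words rather than a single word.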
MORE GENERALLY: I would be interested in any Icon implementations of language
statistics. Examples: counting frequencies of characters in a block of text,
counting word frequencies in a block of text, examining symmetries in poetry,
computing estimated probabilities of particular sequences of characters.
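
The first of those examples is small enough to sketch directly; again this is
my own untested Icon sketch, not an existing implementation. It tallies letter
frequencies over standard input using a table with a default value of zero.

    # Sketch (untested): letter-frequency count over standard input.
    procedure main()
       freq := table(0)
       while line := read() do
          every c := !map(line) do
             if any(&letters, c) then freq[c] +:= 1
       every pair := !sort(freq) do     # sort(T) yields [key, value] pairs
          write(pair[1], right(pair[2], 8))
    end

Word frequencies would follow the same pattern, with string scanning used to
extract whole words as keys instead of single characters.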
Thank you so very much,
Mark Evans <evans@gte.net>